75 research outputs found

    A Mobile-Based Group Quiz System to Promote Collaborative Learning and Facilitate Instant Feedback

    In this paper, we develop and evaluate a mobile-based question-answering system (MQAS) that complements traditional learning and can be used as a tool to encourage teachers to give their students weekly mobile-based group quizzes. These quizzes can provide teachers with valid information about the progress of their students and can also motivate students to work collaboratively in order to facilitate the integration of their knowledge. We describe the architecture of the system and our experiences with it.

    Energy-Aware Streaming Multimedia Adaptation: An Educational Perspective

    As mobile devices become more powerful and more affordable, the use of online educational multimedia is becoming increasingly prevalent. Limited battery power is, nevertheless, a major restricting factor, as streaming multimedia drains battery power quickly. Many battery-efficient multimedia adaptation techniques have been proposed that achieve battery efficiency by lowering the presentation quality of the entire multimedia stream. This adaptation is usually done without considering its impact on the information content of the multimedia. In this paper, based on the results of an experimental study, we argue that adaptation performed without regard for its negative impact on the information content of multimedia may harm the learning process. Portions of the multimedia that require a higher visual quality to convey learning information may lose their learning effectiveness at the adapted, lowered quality. We report results of our experimental study indicating that different parts of the same learning multimedia do not have the same minimum acceptable quality. This strengthens the position that power-saving adaptation techniques for educational multimedia must lower the quality of the multimedia based on the needs of its individual fragments for successfully conveying learning information.
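
    The fragment-level adaptation the abstract argues for can be pictured with a small sketch (our own illustration, not the paper's system): each fragment of a lecture video carries the minimum quality it needs to remain instructive, and a power-saving adapter clamps the global target quality to that floor. All names and values here are hypothetical.

        # Sketch of fragment-aware quality adaptation (hypothetical model, not
        # the authors' implementation): each fragment declares the minimum
        # quality level it needs to stay pedagogically useful, and the adapter
        # never drops a fragment below that floor.

        from dataclasses import dataclass

        @dataclass
        class Fragment:
            start_s: float    # fragment start time in seconds
            end_s: float      # fragment end time in seconds
            min_quality: int  # minimum acceptable quality level (0 = lowest)

        def adapt(fragments, target_quality):
            """Clamp a global power-saving quality level to each fragment's floor."""
            plan = []
            for f in fragments:
                # A slide full of small text may need min_quality=3, while a
                # talking-head shot may tolerate min_quality=0.
                plan.append((f.start_s, f.end_s, max(target_quality, f.min_quality)))
            return plan

        lecture = [Fragment(0, 60, 0), Fragment(60, 180, 3), Fragment(180, 240, 1)]
        print(adapt(lecture, target_quality=1))
        # -> [(0, 60, 1), (60, 180, 3), (180, 240, 1)]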

    Measuring Social Media Activity of Scientific Literature: An Exhaustive Comparison of Scopus and Novel Altmetrics Big Data

    This paper measures the social media activity of 15 broad scientific disciplines indexed in the Scopus database using Altmetric.com data. First, the presence of Altmetric.com data in the Scopus database is investigated, overall and across disciplines. Second, the correlation between the bibliometric and altmetric indices is examined using Spearman correlation. Third, a zero-truncated negative binomial model is used to determine the association of various factors with increasing or decreasing citations. Lastly, the effectiveness of altmetric indices in identifying publications with high citation impact is comprehensively evaluated using the Area Under the Curve (AUC), an application of the receiver operating characteristic. Results indicate a rapid increase in the presence of Altmetric.com data in the Scopus database, from 10.19% in 2011 to 20.46% in 2015. The zero-truncated negative binomial model measures the extent to which different bibliometric and altmetric factors contribute to citation counts. Blog count appears to be the most important factor, increasing the number of citations by 38.6% in the field of Health Professions and Nursing, followed by Twitter count, increasing the number of citations by 8% in the field of Physics and Astronomy. Interestingly, both blog count and Twitter count are associated with an increase in the number of citations across all fields. While there was a weak positive correlation between bibliometric and altmetric indices, the results show that altmetric indices can be a good discriminator of highly cited publications, with an encouraging AUC = 0.725 between highly cited publications and total altmetric count. Overall, the findings suggest that altmetrics can help distinguish highly cited publications.
    Comment: 34 pages, 3 figures, 15 tables
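
    As a rough illustration of two of the analyses described above, the sketch below computes a Spearman correlation and an AUC with standard Python tooling; the data is synthetic, and the "highly cited" cutoff (top decile) is our assumption, not the paper's definition.

        # Illustrative sketch of two of the paper's analyses (made-up data;
        # the paper uses Scopus/Altmetric.com records).

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        citations = rng.poisson(10, 500)                  # bibliometric index
        altmetric_total = citations + rng.poisson(5, 500) # correlated altmetrics

        # Spearman rank correlation between bibliometric and altmetric indices.
        rho, p = spearmanr(citations, altmetric_total)

        # AUC: how well the total altmetric count separates "highly cited"
        # papers (here: top decile by citations, an assumption on our part).
        highly_cited = (citations >= np.quantile(citations, 0.9)).astype(int)
        auc = roc_auc_score(highly_cited, altmetric_total)
        print(f"Spearman rho={rho:.3f}, AUC={auc:.3f}")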

    Gaussian mixture model based probabilistic modeling of images for medical image segmentation

    In this paper, we propose a novel image segmentation algorithm based on the probability distributions of the object and background. It uses the variational level-sets formulation with a novel region-based term in addition to the edge-based term, giving a complementary functional that can potentially result in a robust segmentation of the images. The main theme of the method is that in most medical imaging scenarios, the objects are characterized by typical features such as color and texture. Consequently, an image can be modeled as a Gaussian mixture of distributions corresponding to the object and background. During curve evolution, a novel term is incorporated into the segmentation framework, based on maximizing the distance between the GMMs corresponding to the object and background. Maximizing this distance using differential calculus potentially leads to the desired segmentation results. The proposed method has been used to segment images from three distinct imaging modalities, i.e., magnetic resonance imaging (MRI), dermoscopy, and chromoendoscopy. Experiments show the effectiveness of the proposed method, giving better qualitative and quantitative results when compared with the current state of the art.
    Index terms: Gaussian mixture model, level sets, active contours, biomedical engineering
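
    Only the probabilistic-modelling ingredient can be sketched briefly; the level-set evolution that maximises the distance between the object and background GMMs is the paper's contribution and is not reproduced here. The snippet below simply fits a two-component Gaussian mixture to pixel intensities, a simplified version of the modelling step described above.

        # Minimal sketch of the GMM modelling step only: fit a two-component
        # Gaussian mixture to pixel intensities and label each pixel by the
        # more likely component. Not the paper's full level-set method.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gmm_label(image: np.ndarray) -> np.ndarray:
            """Assign each pixel to object or background via a 2-component GMM."""
            pixels = image.reshape(-1, 1).astype(float)
            gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
            labels = gmm.predict(pixels).reshape(image.shape)
            # Convention: call the brighter component the "object".
            if gmm.means_[0, 0] > gmm.means_[1, 0]:
                labels = 1 - labels
            return labels

        # Synthetic test image: a bright square on a dark, noisy background.
        img = np.random.normal(50, 10, (64, 64))
        img[20:40, 20:40] += 100
        labels = gmm_label(img)
        print(labels[32, 32], labels[5, 5])  # -> 1 0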

    Self-medication among medical students at King Abdul-Aziz University

    Background: A huge number of medications are used without prescription, which confronts us with a real problem: the overuse of medications. Medication overuse does come with physical, mental, and emotional abnormalities. The objective of the study was to investigate the irrational use of NSAIDs, paracetamol, antibiotics, antihistamines, opioids, and anti-anxiety drugs among medical students at KAU. Methods: We conducted a descriptive cross-sectional survey of 507 students enrolled at the medical college of King Abdul-Aziz University in Jeddah in 2015. Two-step stratified random sampling was used to collect the data. The questionnaire included socio-demographic information and data about the use of any of the following medications: anti-anxiety drugs, antibiotics, paracetamol, opioids, NSAIDs, and antihistamines. Data entry and analysis were done with the SPSS software package, version 20. Results: Paracetamol was the drug most frequently used by medical students, 117 (23.1%), followed by antihistamines 48 (9.5%), antibiotics 33 (6.5%), NSAIDs 22 (4.3%), anti-anxiety drugs 7 (1.4%), and opioids 4 (0.8%). Most of this use was self-medication (74%). Fever relief was the most common reason for self-medication reported by medical students, 103 (20.4%); the most frequent side effect was nausea and vomiting, 47 (9.3%). Conclusions: There is an increase in self-medication among medical students at KAU, especially paracetamol and NSAID use. We suggest more studies on the local irrational use of medications and increasing awareness of the importance of prescribed medications.

    Negation and Speculation in NLP: A Survey, Corpora, Methods, and Applications

    Negation and speculation are universal linguistic phenomena that affect the performance of Natural Language Processing (NLP) applications, such as those for opinion mining and information retrieval, especially in biomedical data. In this article, we review the corpora annotated with negation and speculation in various natural languages and domains. Furthermore, we discuss ongoing research into recent rule-based, supervised, and transfer learning techniques for the detection of negated and speculative content. Many English corpora for various domains are now annotated with negation and speculation; moreover, the availability of annotated corpora in other languages has started to increase. However, this growth is insufficient to address these important phenomena in languages with limited resources. The use of cross-lingual models and translation from well-resourced languages are acceptable alternatives. We also highlight the lack of consistent annotation guidelines and the shortcomings of the existing techniques, and suggest alternatives that may speed up progress in this research direction. Adding more syntactic features may alleviate the limitations of the existing techniques, such as cue ambiguity and the detection of discontinuous scopes. In some NLP applications, the inclusion of a negation- and speculation-aware system improves performance, yet this aspect is still not addressed or is not considered an essential step.

    Predicting Academic Performance of Students from VLE Big Data using Deep Learning Models

    The abundance of accessible educational data, supported by technology-enhanced learning platforms, provides opportunities to mine the learning behavior of students, addressing their issues, optimizing the educational environment, and enabling data-driven decision making. Virtual learning environments complement the learning analytics paradigm by providing datasets for analysing and reporting the learning process of students and its reflection in their respective performance. This study deploys a deep artificial neural network on a set of unique handcrafted features, extracted from virtual learning environment clickstream data, to predict at-risk students, enabling early intervention in such cases. The results show that the proposed model achieves a classification accuracy of 84%-93%. We show that the deep artificial neural network outperforms the baseline logistic regression and support vector machine models: while logistic regression achieves an accuracy of 79.82%-85.60%, the support vector machine achieves 79.95%-89.14%. Aligned with existing studies, our findings demonstrate that the inclusion of legacy data and assessment-related data significantly impacts the model. Students interested in accessing the content of previous lectures are observed to demonstrate better performance. The study intends to assist institutes in formulating a framework for pedagogical support, facilitating the higher-education decision-making process towards sustainable education.
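
    A minimal sketch of such a predictor follows, with invented features and data; the study's actual handcrafted clickstream features and network configuration are not reproduced here.

        # Hedged sketch of a deep feed-forward classifier for at-risk
        # prediction. Feature names and data are hypothetical; the paper
        # extracts its features from VLE clickstream logs.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n = 1000
        # Hypothetical handcrafted features: total clicks, days active,
        # mean assessment score, and clicks on previous lectures (legacy data).
        X = rng.normal(size=(n, 4))
        y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                            random_state=0).fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")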

    Demonstrating and negotiating the adoption of web design technologies: Cascading Style Sheets and the CSS Zen Garden

    Cascading Style Sheets (CSS) express the visual design of a website through code and remain an understudied area of web history. Although CSS was proposed as a method of adding a design layer to HTML documents early in the development of the web, it only crossed from a marginal position to mainstream usage after a long period of proselytising by web designers working towards "web standards". The CSS Zen Garden grassroots initiative aimed at negotiating, mainstreaming, and archiving possible methods of CSS web design while dealing with varying levels of browser support for the technology. Using the source code of the CSS Zen Garden and the accompanying book, this paper demonstrates that while the visual designs were complex and sophisticated, the CSS lived within an ecosystem of related platforms, i.e., web browsers, screen sizes, and design software, which constrained its use and required enormous sensitivity to the possibilities browser ecosystems could reliably provide. As the CSS Zen Garden was maintained for over ten years, it also acts as a unique site for tracing the continuing development of web design, and the imaginaries expressed in the Zen Garden can also be related to ethical dimensions that influence the process of web design. Compared to Flash-based web design, work implemented using CSS required a greater willingness to negotiate source code configurations between browser platforms. Following the history of the individuals responsible for creating and contributing to the CSS Zen Garden shows the continuing influence of layer-based metaphors of design separated from content within web source code.

    HTSS: A novel hybrid text summarisation and simplification architecture

    Text simplification and text summarisation are related but different sub-tasks in Natural Language Generation. Whereas summarisation attempts to reduce the length of a document while keeping the original meaning, simplification attempts to reduce its complexity. In this work, we combine both tasks using HTSS, a novel hybrid architecture of abstractive and extractive summarisation. We extend the well-known pointer-generator model to the combined task of summarisation and simplification. We collected our parallel corpus from the simplified summaries written by domain experts published on the science news website EurekAlert! (www.eurekalert.org). Our results show that the proposed HTSS model outperforms neural text simplification (NTS) on the SARI score and abstractive text summarisation (ATS) on the ROUGE score. We further introduce a new metric (CSS1), which combines SARI and ROUGE, and demonstrate that the proposed HTSS model outperforms NTS and ATS on the joint task of simplification and summarisation by 38.94% and 53.40%, respectively. We provide all code, models, and corpora to the scientific community for future research at the following URL: https://github.com/slab-itu/HTSS/
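
    The abstract does not give the formula for CSS1. Purely as an assumption on our part, a harmonic-mean combination in the spirit of an F1 score might look like the sketch below; this is not the paper's definition.

        # Hypothetical combination of SARI and ROUGE into a single score
        # (our assumption, not the CSS1 formula from the paper).

        def css1(sari: float, rouge: float) -> float:
            """Harmonic-mean combination of SARI and ROUGE scores (assumed form)."""
            if sari + rouge == 0:
                return 0.0
            return 2 * sari * rouge / (sari + rouge)

        print(css1(0.40, 0.35))  # -> ~0.373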

    Webometrics: evolution of social media presence of universities

    This is an accepted manuscript of an article published by Springer in Scientometrics on 03/01/2021, available online: https://doi.org/10.1007/s11192-020-03804-y. The accepted version of the publication may differ from the final published version.
    This paper addresses the important task of computing the webometrics university ranking and investigating whether a correlation exists between the webometrics ranking and the rankings provided by prominent world university rankers, such as the QS World University Rankings, for the period 2005-2016. However, the webometrics portal provides the required data only for recent years, starting from 2012, which is insufficient for such an investigation. The rest of the required data can be obtained from the Internet Archive, but the existing data extraction tools are incapable of extracting it, due to an unusual link structure consisting of the web archive link, year, date, and target links. We developed an Internet Archive scraper and extracted the required data for the period 2012-2016. After extracting the data, the webometrics indicators were quantified and the universities were ranked accordingly. We used the Spearman and Pearson correlation measures to identify the relationship between the webometrics university ranking computed by us and the original webometrics university ranking. Our findings indicate a strong correlation between the two, which shows that the applied methodology can be used to compute the webometrics university ranking for those years for which the ranking is not available, i.e., from 2005 to 2011. We computed the webometrics ranking of the top 30 universities of North America, Europe, and Asia for the period 2005-2016. Our findings indicate a positive correlation for North American and European universities, but a weak correlation for Asian universities. This can be explained by the fact that Asian universities did not pay as much attention to their websites as North American and European universities did. The overall results reveal that North American and European universities rank higher than Asian universities. To the best of our knowledge, such an investigation has not been carried out before.
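
    The rank-comparison step can be illustrated with a short sketch using SciPy (toy rankings below, not the paper's data):

        # Spearman and Pearson correlation between two rankings of the same
        # universities; the numbers are invented for illustration.

        from scipy.stats import pearsonr, spearmanr

        our_webometrics_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
        original_rank        = [1, 3, 2, 4, 6, 5, 7, 8, 10, 9]

        rho, _ = spearmanr(our_webometrics_rank, original_rank)
        r, _ = pearsonr(our_webometrics_rank, original_rank)
        print(f"Spearman rho={rho:.3f}, Pearson r={r:.3f}")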